Brains on wheels: Mobile robots for brain research

Author

  • Herbert Jaeger
Abstract

Two mysteries of brains are examined in this article from a roboticist's view. The first is the brain's capacity to make sense of real-life sensoric input, which differs strongly from experimentally administered stimuli. Another riddle posed by biological brains lies in the fact that brain subsystems are functionally indeterminate when seen in isolation, and acquire different functions when they interact with different other subsystems and receive different sensor input ("multi-functionality"). The paper describes insights gained in designing robots which directly relate to these two fundamental questions. Experiences with a "behavior-based" robot, the Black Knight (BK), are reported in some detail. BK had to work in an environment which was so messy that a formal modeling of the sensoric stimuli expected for the robot was out of the question. BK's control program makes do without explicit representations of external circumstances, and relies on "acting things out" instead. This leads to a deeper understanding of the real-life input problem. With respect to the multi-functionality problem, BK offers surprising insights since it is constructed and programmed as a dynamical system comprised of many open subsystems ("behaviors"). Open dynamical systems are intrinsically functionally indeterminate. While the reported methodology and results are of some interest in their own right, the superordinate goal of this paper is to promote robot experiments as a tool for brain research.

1 Two things we don't understand about brains

In this article I want (i) to put my fingers on what I perceive to be two blind spots in current biocybernetic models of vertebrate brain systems (e.g., Yao & Freeman 1990, Ewert et al. 1994, Corbacho & Arbib 1995, Grossberg 1995), and (ii) to argue for robot experiments as a way to sharpen our eyesight. In order to add empirical substance to the methodological argument, the main body of this contribution reports on experiences with a little robot, the "Black Knight". The two blind spots I want to address are (i) the nature of real-life sensoric input, and how the brain makes sense of it, and (ii) the fact that a given neural structure can participate in many functions. I will now explain these two points in more detail.

Real-life sensoric input: The input with which biocybernetic models are tested in simulations or physical experiments is typically highly standardized. Examples are regular bar patterns; "smells" encoded as constant activation levels of simulated receptor cells (Yao & Freeman 1990); prey worms positioned on a laboratory surface behind a barrier of vertical bars at standardized distances (Corbacho & Arbib 1995). Such stimuli, simulated or physical, have in common that they can be succinctly described by the experimenter. That stimuli can be described is an obvious precondition for repeatability of experiments (and for writing papers). However, it is doubtful whether real-life input to physical brain subsystems is describable in so many words. Real-life sensor input is high-dimensional, varies on many timescales, and has very strong stochastic components; in almost every formal aspect, it is ill-behaved.
For instance, a toad approaching a worm in the wild will be confronted with a "pattern" which is, according to circumstances, glistening or dim, darker or lighter than the background, vanishing or appearing, twitching or immobile, big or small, brownish or blackish or reddish or brightly yellow, segmented or smooth, etc., etc.; and to make things worse, the "background" suffers from the same untamed variability. Furthermore, many of these aspects may or may not change on any odd timescale. The very notion of a pattern is, in fact, challenged by this variability. Of course, invariants of some sort must exist (otherwise the toad would not be able to catch a worm now and then). But I don't think that the challenge of modelling such "undescribable" invariants has yet been acknowledged by many (a first step is Bingham's (1995) investigation of qualitative differences between dynamical light point patterns).

Multiple functions: Biological neural systems typically serve many different functions. Their functional flexibility baffles investigators in that for nearly every functional mechanism hypothesized for a given neural system, data can be collected which confirm that hypothesis. A striking example is the cerebellum. It seems to be a general hypothesis-verification device: for almost every motor control mechanism devised by control theorists, empirical recordings can be obtained which demonstrate that the cerebellum indeed implements this mechanism (compare e.g. Bloedel 1992, Miall 1993, Kawato & Gomi 1992). This multi-functionality of real brain subsystems forms a striking contrast to biocybernetic models. They are typically made of modules which each serve a single purpose. A related observation is that biological brains are quite robust in the face of (at least some) parameter changes. A drugged brain will change the way it functions, even quite drastically, but it often will still function in a way that makes some sense, if not the sense it made before. For an illustration, consider how human behavior changes when subjects are drunk. By contrast, biocybernetic models are often highly sensitive to parameter settings. Typically, the (single) functionality of such a system breaks down when the model's parameters leave a narrow margin, and this functionality is not replaced by another.

In this situation, where the epistemological consequences are as intransparent as the situation itself, it is interesting to see how roboticists manage. They somehow have to make their robots cope with the messiness of real-life sensor input. Furthermore, as will be explained in this article, some modern strategies of robot design also inherently run into the problem of multi-functionality.

The article is organized in the following way. The next section provides an introduction to "behavior-based" robotics, a relatively young approach to building mobile robots which I believe is particularly relevant for brain research. In section 3 I describe a robot and its control program, which is designed in the behavior-based tradition, using differential equations as a specification language. The main portion of this section, and of the entire article, is a detailed discussion of what can be learnt from this robot with respect to the questions of real-life sensoric input and multiple functions. In the final discussion (section 4) the arguments from the preceding sections are revisited and bundled.

2 Behavior-based robotics

Many strands in academic research and industrial development are called "robotics".
In this article, I refer to a particular, relatively young tradition variously named "behavior-oriented [or behavior-based] robotics", "bottom-up robotics", or "New AI". Behavior-oriented robotics has arisen in a multi-disciplinary context between classical robotic manipulator engineering, artificial intelligence, cognitive science, ethology, biology, and "artificial life" research. This particular kind of robotics has a strong flavor of basic research, although recently decisive steps toward economically relevant applications have been taken (e.g., the sewage inspection robots developed in the group I belong to, cf. Kirchner 1996). An ancestor of behavior-based robotics is Braitenberg's (1984) famous booklet on "vehicles"; good contemporary introductions are Brooks (1991) and Steels (1993); a comprehensive textbook has recently been written by Pfeifer and Scheier (1996). The goal of research in this field is generally to build and understand (in this sequence) mobile "agents" that can swiftly and robustly cope with the messiness of physical environments. Naive observers are often amazed at how strongly behavior-based robots remind them of live insects: they are typically quite small, agile, fast (compared to their size and to other kinds of robots, that is), and almost incessantly active. Cooperation with ethologists and biologists is common, especially with insect researchers (Deneubourg et al. 1991, Cruse et al. 1995).

The field is tied together by several methodological commitments (thoroughly explained in Pfeifer & Scheier 1996). Out of these principles, I pick three which are particularly relevant for connecting behavior-oriented robotics with brain research:

Integrated agents: In the perspective of behavior-oriented robotics, it does not make much sense to investigate single information processing faculties like "vision", "knowledge representation", or "motor control". It is believed that every "mechanism" in an agent is intimately related to almost every other, and even more, that every mechanism is related to the body's hardware; furthermore, it is held that an embodied intelligent information processing system can only be understood (and designed) when the agent's overt motor behavior is integrated into the picture. To push the integrative view even further, it is also claimed that the agent's goals and motivations must be included in the picture, and, of course, the characteristics of the environment the agent is adapted to. Thomas Christaller calls this the "oyster principle" (personal communication): one has to swallow the thing wholesale, or not at all. The emphasis on integratedness has two immediate consequences. First, each of the agent's "behaviors" (like walking, recharging, or picking up an object) is conceived as an integrated entity that covers the complete internal pathway from sensing to motor command generation, including (possibly) specialized representations and motivational mechanisms. This differs from the classical artificial intelligence (AI) approach to robotics, where sensor stimuli are first processed into a general representational format which is then globally available for further use. Second, a behavior is not considered in isolation. The agent is viewed as a system of many interacting behaviors, indeed, as a system where all potential behaviors potentially interact (Steels 1993). Thus, a research focus of behavior-based robotics is to investigate integrated, complete behavior systems.
This brings this kind of research, again, into close contact with ethology (Tyrrell 1993).

Figure 1: The "Black Knight" approaching a pushbox (the cylindric device at the left margin, emitting a vertical bar of moderately bright light) which stands beside the charging station (the brightly lit brick construction).

Balanced design: The "insect robots" typically built by behavior-oriented roboticists are more often than not simple contrivances, compared to the standards of more traditional schools of robotics. Fig. 1 shows a robot (the "Black Knight") built by members of Luc Steels' AI Lab at the Vrije Universiteit Brussel. Its body is made from Lego bricks; its sensor equipment consists of four bump contacts, three active infrared sensors, two wide-angle photodiodes and another two narrow-angle photodiodes; it has two drive wheels with independently controlled motors; and a microcontroller control board (Motorola 68332 based) which has also been developed at the VUB AI Lab (consult http://arti.vub.ac.be/~cyrano/robot home.html for further information). The Black Knight's task is to wander around in an arena limited by plywood walls; to avoid obstacles while wandering; to do "work" by knocking into light-emitting "pushboxes" scattered in the arena (a pushbox gets gradually dimmed when knocked into a few times, which tells the robot to look for the next one); and to recharge at a power station when the batteries run down. Fig. 1 shows a view of this setup. This environment has been co-developed with the ethologist Dave McFarland (1994) as a basic model of an ecosystem, and there is more to it from that side than I can mention here. The apparent simplicity of the Black Knight, its cheap design, is explicitly intended. It is sufficient for the robot's task. Remember that in the "integrated agents" perspective, the agent is always considered with respect to its environment and its tasks. The idea of a balanced design means that the robot's hardware, sensing and motor capabilities, and its control program should all be of just the right complexity with respect to each other, and generally as simple as possible. A nice consequence of such cheap design is that these robots are often quite lightweight compared to classical robots. The value of low weight can hardly be overrated. Low weight enables faster motion, simpler motor control, and longer operation time per battery charge. Such innocent factors add up in practice to a qualitative jump: the motions of behavior-based robots convey, to the naive human observer, an impression that is more creature-like than machine-like. It should also be noted that the relatively low complexity of robots like the Black Knight already amounts to the maximal complexity that can be understood by a single human designer, or be developed by evolutionary techniques, a central theme in the behavior-based camp (Harvey et al. 1994, Thompson 1995).

Natural environments: According to the tenets of behavior-oriented robotics, robots should be designed to cope with messy environments. If at all possible, the robot should be adapted to the environment. It is considered bad taste to adjust the environment to the robot, although to a certain degree that is unavoidable. Ideally, the research strategy is to start with robots that perform simple tasks in complex environments, not vice versa, as in classical AI-style robotics. In the case of the Black Knight in its VUB AI Lab arena, this implies that no standardized lighting conditions are provided.
This puts a heavy load on the robot's adaptation, because (i) its behavior is guided almost entirely by three kinds of simple light sensors, and (ii) the ambient light in the arena changes dramatically: it can be anything between direct sunlight in the morning and relatively dim neon lighting from the adjacent student workplaces in the evening. This situation forces the robot designer to face head-on the "real-life" sensoric input variability mentioned in the previous section.

These three methodological commitments (and others not mentioned) put the behavior-oriented roboticist in a tight spot. In fact, he is facing the two problems mentioned in the previous section, real-life sensor input and multiple functions. While the former is obvious, the latter needs a brief explanation. Why and how do behavior-oriented roboticists face the multiple function problem? A first observation is that obviously they are not confronted with this problem on a neurophysiological or neuroanatomical level, because they are not investigating biological brains. They are instead confronted with the multi-functionality problem on the more abstract level of sensorimotor control dynamics. If every behavior potentially interacts in an integrated fashion with every other behavior, as the commitment to integratedness will have it, then each behavior in itself must be a fundamentally adaptive entity, which might have to change its dynamics qualitatively according to the current activity of other behaviors (and, of course, according to sensor input). In other words, a behavior must in principle be able to change its own way of functioning. In this sense, behavior-oriented robotics faces the problem of multi-functionality.

In the next section, I will describe some aspects of a control program for the Black Knight, which I was given the opportunity to use and test in the VUB AI Lab. It will become clear from the example that, in fact, the problems of real-life sensoric input and multi-functionality are closely connected. It will also, I hope, become clear that this kind of robotics research is fertile in generating ideas which might be of use in the vastly more complex and inaccessible realm of biological brains.

3 Two things I start to understand about the Black Knight

In this section, I will describe how the Black Knight (BK) copes with the variable lighting conditions in the VUB arena in a way that involves not isolated "sensing" subsystems but the entire, integrated behavior system. Before I can go deeper into these matters, however, it is necessary to explain the basic design rationale behind BK's behavior control system. Therefore, this section has two parts. In the first subsection, I describe BK and its control program in some detail. Then, in the second subsection, I proceed to describe BK's performance, and what I have learnt from it.

3.1 How I designed BK

The control program that I implemented on BK was written according to the "Dual Dynamics" (DD) design scheme. DD is a formal framework for designing control architectures for mobile robots, which allows the designer to write down behavior specifications in the form of ordinary differential equations. The basic assumption of DD is that a situated agent works in different modes.
A mode is a particular "tuning" of the entire sensing and acting system which enables the agent to behave in a swift and integrated fashion in a particular class of situations (where a "situation" is characterized both by external circumstances and by internal motivational parameters). The notion of modes has many facets. It is related to behavior systems in ethology (Baerends 1976, Tyrrell 1993), i.e. patterns of activities that can be identified on statistical and functional grounds. In animals, modes are often linked to relatively slowly changing somatic (hormonal, neurological) conditions. Modes can serve internal needs, e.g. by activating feeding behavior. Modes also allow the agent to tune in deeply to situations, by establishing particular, adaptive filtering and expectation mechanisms for perception, by facilitating particular motor responses and inhibiting others, etc.

While the notion of modes is hard or impossible to define precisely on a phenomenological level, it can be precisely defined within a particular mathematical model. When the agent's behavior control system is viewed as a continuous dynamical system, modes correspond to phases, i.e., regions in some space of control parameters where the system exhibits unique qualitative properties. From a mathematical view, transitions between phases (and hence, modes) are bifurcations. In its entirety, the DD scheme can be sketched as in fig. 2a.

Figure 2: (a) Global structure of a DD behavior control system. At any time, every behavior has an activation. Activations of higher-level behaviors (depicted in shaded boxes) act as control parameters for the activation dynamics of lower levels. The dynamical system which maintains a behavior's activation can undergo bifurcations; this is indicated by depicting these systems as stylized phase diagrams (boxes with irregular partitions). A mode of the entire system is thus determined by the activations of all higher-level behaviors. (b) The target and activation subsystems of an elementary behavior.

The basic building blocks of a DD control architecture are behaviors. They are ordered in levels. At the bottom level (level 0), one finds elementary behaviors: sensomotoric coordinations with direct access to sensors and actuators. Typical examples are move forward and turn left. At higher levels (level 1, 2, etc.), we find increasingly complex behaviors. Like elementary behaviors, they have direct access to sensor data, but unlike the former, they have no direct access to actuators. Typical examples of high-level complex behaviors are work and replenish energy; at an intermediate level one might find complex behaviors like roam or dock-at-charging-station. Complex behaviors serve the regulation of modes. They correspond to particular aspects of modes. The higher the level, the slower the time scale of the aspects regulated on that level, and the more general those aspects. For instance, work realizes a long-term, general mode, which can, for shorter time spans, be further modulated by intermediate-level complex behaviors like roam.

Behaviors, elementary and complex, always maintain a current activation value which ranges between 0 and 1. It is regulated by a dedicated dynamical system, the behavior's activation dynamics. This system can undergo bifurcations, which yields the basic mechanism for changing modes. Accordingly, in fig. 2a, the dynamical systems responsible for the activation of behaviors are rendered as stylized phase diagrams. Fig. 2b shows an elementary behavior in more detail.
It indicates how the activation value maintained by a behavior's activation subsystem gates the actuator command values generated by the target dynamics subsystem.

A mode characterizes the robot's behavior control system in its entirety. Mathematically, a mode is defined by the phases of the activation dynamics of all behaviors. In turn, these phases are determined by the activations of all complex behaviors, in that the activations of the complex behaviors on some level serve as control parameters for the activation dynamics on the next lower level (downward arrows in fig. 2a). Behaviors on the highest level are always in a single phase, i.e. they don't bifurcate. An important characteristic of DD architectures is that elementary behaviors are not "called to execute" in a top-down fashion. Rather, the elementary level is "tuned" as a whole by top-down control parameters. It would remain fully operative on its own even if the connections to higher levels were cut; it just couldn't adapt to changing circumstances any more. The formal specification of DD control systems is given in appendix A.

DD differs in its intentions from specific biocybernetic models of behavior control in particular species (like Ewert et al. 1994, Corbacho & Arbib 1995) in that those models provide concrete accounts of only a section of the animal's behavioral repertoire, whereas DD is an abstract scheme for complete behavior repertoires. From abstract ethological models of complete behavior systems (e.g. Tyrrell 1993), and related behavior system models developed in artificial intelligence and robotics (e.g. Brooks 1989, Maes 1990), which are all essentially discrete models, DD is distinguished by the fact that it captures the agent's continuous, ongoing interaction with the environment, and does not require the agent to be able to discretely classify environmental states.

The DD scheme harmonizes with several biocybernetic models and biological findings. For instance, the modulation of spinal reflex systems by descending commands described by McCrea (1992) can be interpreted as a top-down induced bifurcation of an elementary behavior level. The GO signal for gating planned movements as described in Grossberg's and others' models for voluntary movement control (Bullock & Grossberg 1988) corresponds to a behavior's activation variable. In the face of such seductive correspondences I wish to stress that I developed the DD scheme solely with the goal of achieving a maximally transparent "mathematical engineering" scheme for complex behavior systems. No biological plausibility in the sense of mapping DD to biological neural control systems was intended. I understand the fact that such mappings are partially possible as just another demonstration of the enormous richness of neural systems: they are so rich that almost every conceivable man-made control system will correspond to some aspects of them. Therefore, I wouldn't dare to draw lessons for brain research by mapping DD onto concrete physiological mechanisms. Rather, DD and the Black Knight help me to learn about general principles which hold for every continuous-dynamics, complete behavior control system guiding a situated agent, man-made or alive. There are significant insights of this kind which can be gained more easily with robots than with animals.

So much for a brief introduction to the background philosophy of the program installed on the Black Knight. A detailed tutorial on DD (Jaeger 1996a) is available (consult http://www.gmd.de/People/Herbert.Jaeger/Publications.html).
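To make the two-subsystem structure of an elementary behavior more concrete, here is a minimal sketch in plain C (not in PDL, the language actually used on BK). Everything in it (the variable names, the time constants, the form of the decay term, and the forward Euler integration step) is my own illustrative assumption, not a transcript of BK's control program.

#include <stdio.h>

/* Minimal sketch of one DD elementary behavior: an activation dynamics
   (one ODE) gating a target dynamics (one ODE per actuator), both
   integrated with forward Euler. All names and constants are illustrative
   assumptions, not taken from BK's actual PDL code. */

#define DT 0.01   /* integration step in seconds (assumed control cycle) */

typedef struct {
    double activation;  /* activation in [0,1], maintained by the activation dynamics */
    double target;      /* target value for one actuator (e.g. right wheel speed) */
} Behavior;

/* very steep sigmoid used as a soft threshold, idealized here as a hard step */
static double steep_sigmoid(double x, double threshold) {
    return x > threshold ? 1.0 : 0.0;
}

/* one Euler step of the behavior, driven by sensor input and by the
   activation of a higher-level behavior acting as control parameter */
static void behavior_step(Behavior *b, double sensor, double higher_activation,
                          double default_speed) {
    /* activation dynamics: relax toward the thresholded sensor value,
       at a rate modulated by the higher-level activation; decay when
       the higher level is inactive */
    double attractor = steep_sigmoid(sensor, 0.03);
    double act_dot = 20.0 * higher_activation * (attractor - b->activation)
                     - 5.0 * (1.0 - higher_activation) * b->activation;
    /* target dynamics: follow a default speed plus a sensor-dependent bias */
    double tgt_dot = 10.0 * (default_speed + 0.35 * sensor - b->target);

    b->activation += DT * act_dot;
    b->target     += DT * tgt_dot;
}

/* the behavior's contribution to the motor command is its target gated by
   its activation; the contributions of all behaviors are then summed */
static double motor_contribution(const Behavior *b) {
    return b->activation * b->target;
}

int main(void) {
    Behavior pushbox = {0.0, 0.0};
    for (int step = 0; step < 500; ++step) {
        double sensor = (step > 100) ? 0.2 : 0.0;  /* a light source "appears" */
        behavior_step(&pushbox, sensor, 1.0, 0.5);
        if (step % 100 == 0)
            printf("t=%.2f  act=%.3f  target=%.3f  motor+=%.3f\n",
                   step * DT, pushbox.activation, pushbox.target,
                   motor_contribution(&pushbox));
    }
    return 0;
}

The point to note, mirrored in behavior_step, is that nothing is "called" from above: the activation of the higher-level behavior enters only as a control parameter that tunes the rates at which the elementary behavior relaxes toward its attractors.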
BK's control system has been devised in the form of differential equations, which were then implemented in PDL, a C-based programming language developed at the VUB AI Lab to run continuous dynamics on the robot's microcontroller. The heavily documented program code is available electronically (ftp://ftp.gmd.de/GMD/ai-research/Publications/1996/jaeger.96.dd14.c).

3.2 What BK taught me

I will now describe how BK copes with the messiness of light stimuli in a particular situation, namely, when it approaches a pushbox guided by phototaxis. The pushboxes emit a frequency-modulated light in the visible red range. The signals yielded by the robot's two narrow-angle photodiodes are postprocessed in a hardwired fashion to respond only to frequency-modulated light, and hence to enable the robot to spot the pushboxes among other light sources. Let us name the readings yielded by these modulated-light-sensitive receptor devices Ml and Mr, for the left and right photodiode "eye" respectively. Unfortunately, the demodulation postprocessing is infested with imperfections, such that Ml and Mr reach considerable amplitudes even when the sensors are directed toward non-modulated sources of bright white light, for instance the charging station or (in the morning) the Lab's windows. Thus, without taking further measures, the robot would frequently head in nonsensical directions in order to knock into nonexistent pushboxes.

A part of the solution is to exploit the fact that the wide-angle photodiodes are not very sensitive to the modulated red light emitted by the pushboxes. Hence, when the robot is heading toward some bright white light, both the narrow- and the wide-angle photoreceptors will read high, whereas when the robot drives toward a pushbox, only the narrow-angle ones will "fire". Therefore, the signals from both kinds of sensors are combined into new "filtered" sensory quantities Mlf, Mrf, which are basically the original readings Ml, Mr in a version which becomes suppressed by high readings of the wide-angle photodiodes. (Actually, all of these M quantities follow rises in raw sensor readings instantaneously but decay considerably more slowly than the raw readings, thus taking on a short-term-memory "afterimage" characteristic. For the present purpose, however, we can ignore such temporal subtleties.)

The sensor quantities Mlf, Mrf might now be used to guide taxis toward a pushbox as in a Braitenberg vehicle: while Mlf > Mrf (i.e., the pushbox is seen more brightly in the left "eye"), accelerate the right wheel and decelerate the left, which results in bending the robot's path to the left; and vice versa for Mlf < Mrf. However, this would not work satisfactorily, since the wide-angle photodiodes are also sensitive to infrared light. When the robot comes close to the pushbox, the pushbox strongly reflects the infrared emitted by the robot's own IR-based obstacle avoidance system, and this reflection would suppress Mlf and Mrf just like ordinary white light. Thus, Mlf and Mrf are of little use for guiding the final part of the approach.

This problem is solved on BK in the following way. Remember that a behavior in the DD scheme consists of two dynamical subsystems. The activation dynamics is responsible for maintaining the behavior's overall activation, while the target dynamics computes the target trajectories for all actuators concerned.
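As an aside, the "afterimage" filtering of the modulated-light readings described above can be sketched roughly as follows. This is a hypothetical illustration only: the function and type names, the suppression rule, and all constants are my assumptions and are not taken from BK's actual sensor preprocessing.

#include <stdio.h>

/* Sketch of an "afterimage" filter: the filtered value follows rises in the
   raw demodulated reading instantaneously, decays slowly afterwards, and is
   suppressed while the wide-angle photodiode reports bright ambient light. */

#define DT          0.01   /* assumed control cycle in seconds */
#define DECAY_RATE  2.0    /* slow exponential decay of the afterimage */
#define SUPPRESSION 8.0    /* additional decay induced by bright ambient light */

typedef struct {
    double value;          /* filtered quantity, e.g. Mlf or Mrf */
} AfterimageFilter;

static void afterimage_update(AfterimageFilter *f,
                              double raw_modulated,  /* raw reading, Ml or Mr */
                              double wide_angle)     /* ambient light reading */
{
    if (raw_modulated > f->value)      /* follow rises instantaneously */
        f->value = raw_modulated;
    /* slow decay, accelerated while non-modulated bright light is seen */
    f->value -= DT * (DECAY_RATE + SUPPRESSION * wide_angle) * f->value;
    if (f->value < 0.0)
        f->value = 0.0;
}

int main(void) {
    AfterimageFilter mlf = {0.0};
    for (int t = 0; t < 300; ++t) {
        double ml   = (t >= 50 && t < 60) ? 0.8 : 0.0;  /* brief modulated flash */
        double wide = (t >= 200) ? 0.5 : 0.0;           /* bright ambient light later on */
        afterimage_update(&mlf, ml, wide);
        if (t % 50 == 0)
            printf("t = %3d   Mlf = %.3f\n", t, mlf.value);
    }
    return 0;
}

Whether the suppression is multiplicative, subtractive, or applied at a different stage is exactly the kind of detail that, as described below, matters less than getting the time constants right.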
The pushbox behavior implemented on BK uses the filtered quantities Mlf and Mrf in its activation dynamics in order to be alerted when modulated light, but no other bright light, is perceived. However, for the target dynamics, the unfiltered quantities Ml and Mr are employed. This does no harm, since the behavior is altogether inactive unless Mlf and Mrf are high; and it is beneficial in the final phase of approach, since the infrared reflection is ignored (in fig. 3, the first drop of Mrf is probably due to this IR reflection). To give a bit more detail here, I include the pushbox activation dynamics, which maintains the behavior's activation Apbx, and the relevant portions of the target dynamics generating the target value Rpbx for the right motor:

\dot{A}_{pbx} = 20.0 \cdot A_{work} \cdot \big( \sigma(M_{lf} + M_{rf}) - A_{pbx} \big) + \text{decay term}    (1)

\dot{R}_{pbx} = 10.0 \cdot [\ldots] \cdot \big( V_d + 0.35 \cdot f(M_l - M_r) - R_{pbx} \big) \cdot [\ldots]    (2)

In these equations, Awork is the activation of a higher-level work behavior, which like all DD activations varies in the range from 0 to 1; σ is a very steep sigmoid with turning point at 0.03; the decay term is a suitable expression that makes Apbx go toward zero if Awork goes toward zero; Vd is the default forward speed target signal for the motors; and f is an arctan-like switching function which is much flatter than σ. One way to understand (1) is to observe that σ(Mlf + Mrf) jumps to 1 when (Mlf + Mrf) surpasses the very low threshold of 0.03, and is 0 otherwise, so that Apbx is pulled toward 1 or 0 accordingly. The speed of this jump is basically set by the time constant 20.0, but becomes modulated by Awork. If Awork goes toward zero, the decay term becomes dominant and pulls Apbx toward zero. (2) essentially lets Rpbx follow the trajectory of Vd + 0.35 f(Ml − Mr) with time constant 10.0. That is, if Ml > Mr (which might correspond to a modulated light source to the robot's left), the right motor gets a target signal Rpbx which is greater than the default signal Vd. Since the left motor will analogously receive a signal smaller than Vd, the net effect is a left bend of the trajectory (which is, however, the whole truth only when no other behaviors are active at the same time).

The activation dynamics is generally faster than the target dynamics (time constant 20.0 vs. 10.0), and is highly sensitive to any readings of Mlf + Mrf that surpass the low threshold of 0.03. The rationale is that the robot "jumps into attention" when there is any indication of a pushbox, and remains attentive until Mlf + Mrf has dropped again below 0.03, which requires some time since Mlf and Mrf decay exponentially. Note that the motor target trajectory Rpbx generated by the pushbox behavior yields only one contribution among many others, which are superimposed on each other to result in the command actually received by the motor. In particular, the fact that Rpbx is zero at the beginning of fig. 3 does not imply that the robot is halting, since other behaviors will be active at that time (in this case, the roam behavior).

Figure 3: A trace of some robot-internal quantities during an episode of two knocks into a pushbox. The quantity Rpbx is the target value for the right motor generated by the pushbox behavior's target dynamics. The diagram covers roughly 8 seconds and two knocks (consider the Rpbx trajectory in the middle section of the interval for an approximate trace of the robot's forth-and-back movement).

This simplified description of BK's pushbox behavior should give some substance to the two main points of this section, which I am going to treat presently, namely, (i) that BK copes with real-life sensor input by sorting out its meaning through action, rather
than by trying to build a "representation" of the current world state, and (ii) that this acting involves the integrated agent.

At first glance, it might seem that BK's ability to distinguish pushboxes from bright light sources is explained by the filtering mechanism which yields Mlf and Mrf. However, although some filtering of this kind is necessary, in itself it doesn't help to understand BK. Even the filtered quantities are quite unreliable, and have remarkably different dynamic characteristics in different lighting conditions. It would be a misconception to believe that Mlf and Mrf "represent" modulated light sources. The crucial fact that makes BK function is not that the robot computes reliable representations but that it almost always moves. When some above-threshold Mlf, Mrf reading comes in and Apbx shoots up, the robot does not halt to figure out where this interesting light came from and then decide on an appropriate heading direction. What actually happens is that the robot bends its trajectory a bit toward the side where Ml or Mr is stronger. If the interesting light input was spurious, which is often the case, Rpbx and Apbx will swiftly die down again, and a small bend in the trajectory will be all that can be outwardly noticed in BK's motoric activity. BK's trajectory is most of the time determined by BK's roam behavior, which is BK's standard behavior in situations when other behaviors are not active. Roam's activation is suppressed by Mlf + Mrf, but on a timescale which is slower by an order of magnitude than that of pushbox's activation dynamics. Therefore, Mlf and Mrf signals must persist for some time (roughly 0.5 seconds) before pushbox completely takes over control from roam. During these 0.5 seconds, the "bending force" toward the potential modulated light source will increase, and BK will smoothly turn towards it and, if the Mlf and/or Mrf readings continue to come in steadily, display a swift phototaxis toward the maximum of the light.

In this picture there is no moment of "action selection". Rather, the entire behavior determination process is a nonlinear interplay of activations which continually unfolds while, and because, the robot moves along. Two further observations emphasize that phototaxis is best understood from the perspective of ongoing motion.

First, the motion dynamics of phototaxis toward pushboxes looks quite different in different lighting conditions. In malign conditions (e.g., when the pushboxes are approached against bright sunlight), the pushbox behavior is reduced to a spurious "nodding" of the trajectory toward the pushbox, and succeeds only in rare cases when the robot more or less would have knocked into the box by accident anyway. In rare instances, when the pushbox behavior gets activated rather slowly but roam (or other competing behaviors) gets deactivated quickly, the robot seems to "hesitate" in the vicinity of a pushbox, slows down, and slowly turns toward it, almost on the spot, before accelerating again. Such "undecidedness" can also be the result of a conflict between the high-level modes work and recharge, which can transiently slow down lower-level behaviors. Of course, there are also cases when the pushbox behavior just works as the designer thinks it should: an elegant approach at normal speed, without any wiggling and re-orientation.
Second, it turned out in testing BK that the biggest hurdle in getting the pushbox behavior to work satisfactorily was to get the various time constants in the differential equations right (which includes sensor preprocessing, e.g. the computation of Mlf and Mrf, which is also done with differential equations). Time constants regulate the temporal dynamics of the behavior control system's "execution"; they do not relate to "representations", at least not in any customary sense of the word. Representations are correct or incorrect; if anything, it is their "contents" that matters; it makes little sense to improve a deficient representation by speeding it up or slowing it down. However, for BK's quantities the latter turned out to be the crucial aspect. It is the delicacy of timing of simple mechanisms, not refinement of representation, that makes this little affair with its four photodiodes run smoothly even in malevolent lighting conditions.

I will now explain what I mean by saying that BK operates as an "integrated agent", by contrasting BK with the "fragmented" behavior control found in classical AI-style mobile robots. The classical AI approach to robot behavior control is to view a robot as a rational agent that pursues goals. Such an agent observes its environment; generates from the obtained observations a representation of the external world state; compares this representation with an explicit representation of the goal state that should be achieved; computes and updates a plan for reaching that goal, i.e. a sequence of actions to reach subgoals; selects and executes actions when appropriate preconditions are satisfied in the perceived world; and finally monitors the consequences of the actions and updates the world state representation accordingly (for concrete examples cf. Congdon et al. 1992; a classical critique is Suchman 1987). A great portion of the designer's ingenuity goes into the specification of the symbolic inferences that operate on the internal representation. Planning and action selection are symbol manipulations which are (as operations) fundamentally decoupled from the incoming sensoric input. Even most of the updating of the world state representation consists of inferential housekeeping processes which are completely internal to the representation subsystem. A robot of this kind operates in what I would call a fragmented fashion, in that (i) the control program consists of modules each of which can be designed and understood quite by itself (e.g. sensor preprocessing, knowledge representation, world state representation, planning, action selection, and motor control modules); and in that (ii) the actual behavior of such robots is designed, and can be understood, as a discrete sequence of actions.

By contrast, BK operates as an integrated agent (i) in that none of the processes that make up BK's control program can be designed or understood in isolation, and (ii) in that behaviors smoothly (albeit quickly) blend in and fade out, sometimes get superimposed, and continually modify each other. Virtually all computations have an immediate effect on the ongoing motor behavior. BK has no properly "internal" computations. Importantly, the immediate effectiveness of computations on motoric activity also pertains to what one might call "motivational" processes. An example is the dynamics of the activation Ar of the recharge behavior.
This quantity Ar, which one might intuitively understand as "appetite to refuel", or simply as "hunger", has a slow dynamics, which depends mostly on the current battery level, and occasionally on the "appetizing" perception of white light, such as the light emitted by the charging station. Now, a small increase of Ar would result, among other overtly measurable effects, in a slight slowdown of the reaction time for phototaxis toward pushboxes (since Apbx is indirectly inhibited by Ar). Inconspicuous as this effect may seem, it becomes integrated with dozens of other, likewise inconspicuous effects, and a handful of more obvious ones, into the current motor activity. It has become a very palpable experience for me that precise timing is important even for such slow, high-level "motivational" quantities. Walking obviously gets grossly disturbed when the timing and relative magnitudes of signals to right/left leg muscles get into wrong relative phases; on a more long-term scale, comprehensive behavior patterns like full work-recharge cycles become likewise disorganized when the timing and relative magnitude of "motivations" are not properly tuned.

It might seem at first glance that the integral nature of BK's control system makes design much harder than would be the case with more classical, modularized systems. This is only partially true. Doubtlessly, the required tuning of BK's many time constants is a chore which is absent from classical designs. But this extra effort is compensated by the small overall size of BK's control system. It consists of a mere 25K of C code. Keeping integrated systems small is possible since no subprocess ever needs to work alone. Useful functionality and robustness are global properties of the integrated system. By contrast, modularized systems are necessarily bigger since each module must serve a completely specified functionality, and must be equipped with its own explicit interfacing, housekeeping and failure management in order to warrant its local robustness. Global functionality and robustness often require additional scheduling and recovery management modules.

I conclude this section with some remarks on how BK's control system relates to multi-functionality, one of the methodological "blind spots" addressed in the first section. A key observation is that in BK's control program behaviors are implemented as dynamical systems, not as algorithms. An algorithm, by definition, computes a function, i.e. realizes a discrete input-output mapping. Viewing complex information processing systems as a network of algorithms is the perspective of classical AI, cognitive science, and the "physical symbol systems" (i.e., computer) metaphor of brains (cf. Vera & Simon 1993 for a recent defense of the classical perspective). This view is also connected to the popular model of brains as "societies" of "agents" (a classic is Minsky 1985, a recent critical overview is Woolridge & Jennings 1995), inasmuch as (software) agents, like algorithms, are encapsulated entities which accomplish specific tasks. The algorithmic perspective is present whenever feedforward artificial neural networks are used to model brain functions, or when repeatable responses of biological neurons to well-specified, repeatable stimuli are sought. The algorithmic perspective is compelling for the human researcher, since it allows one to break down a complex system into modules which can be understood individually. The drawback is that it fundamentally bars an understanding of multi-functionality.
By contrast, the dynamical systems that make up BK's behaviors cannot be understood in terms of input-output mappings, and in designing them one is forced to confront multi-functionality. In stating that dynamical systems fundamentally differ from algorithms, I am touching on an issue which tends to create a lot of confusion. Many facts exist which seem to blur this distinction, e.g. that dynamical systems can be simulated on symbolic processors, that a computer is a physical dynamical system, that the notion of an algorithm has recently been extended by mathematicians to encompass certain continuous types of computation, that the abstract theory of dynamical systems treats symbolic processes as dynamical systems, etc. I cannot adequately deal with these matters here, but must refer the interested reader to van Gelder's (1996) painstaking methodological inquiry, where the difference is worked out in a way to which I fully subscribe.

The proper way to understand a dynamical system is to consider how it changes itself, i.e. its state. This is obvious for autonomous systems (in the mathematical sense of autonomy, i.e. of a formal system without input). The dynamical systems that make up BK's behaviors are not autonomous in the mathematical sense, since time-varying input terms appear on the right-hand side of their equations. The proper way to understand such open systems is more complex: one has to consider how the system's way of changing itself is in turn changed, depending on the input administered. But still one has to understand how the system changes itself. This is completely different from how one has to understand algorithms. The fact that open systems develop their dynamical properties only in interaction with other (input-providing) systems has been stressed by H. Shimizu. In order to emphasize this fact, he has termed such systems "relative" in a paper where he explored philosophical consequences for understanding biological systems (Shimizu 1993). A basic mathematical analysis of the dynamical complexity of coupled "neuromodule" networks, investigated under the auspices of relative systems, is reported by Pasemann (1994, 1996). The author uses the term "autotrophic systems" to denote their "functional indeterminacy" when they are viewed in isolation. The well-known forerunner of these recent mathematico-epistemological conceptions is the notion of "autopoietic" systems (Maturana & Varela 1984), which however focusses much less on a system's interactions with other systems.

For an elementary example from the Black Knight, consider the system (1) that regulates Apbx. If the input parameters Awork and Mlf + Mrf were frozen at suitable values, the system would be autonomous and could be understood by noticing that its state variable, Apbx, relaxes to the single attractor point σ(Mlf + Mrf) with an exponential time constant 20.0 · Awork. However, in practice the input parameters never are frozen, and the system (1) must be understood by considering how this relaxation dynamics itself changes.
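The following small numerical experiment illustrates this point by integrating equation (1) under three different input regimes. It is a sketch only: the concrete sigmoid, the form of the decay term, and the input signals are assumptions made for the sake of the illustration.

#include <stdio.h>
#include <math.h>

/* Numerical illustration of how the single equation (1) shows different
   "flavors" depending on its input regime. The decay term and all input
   signals below are illustrative assumptions, not BK's actual code. */

#define DT 0.001

static double sigma(double x) {           /* very steep sigmoid, turning point 0.03 */
    return 1.0 / (1.0 + exp(-400.0 * (x - 0.03)));
}

/* one Euler step of equation (1) */
static double step(double A, double A_work, double M) {
    double decay = -5.0 * (1.0 - A_work) * A;          /* assumed decay term */
    return A + DT * (20.0 * A_work * (sigma(M) - A) + decay);
}

static void run(const char *label, double A_work, int oscillating) {
    double A = 0.5;
    for (int i = 0; i < 4000; ++i) {
        /* input either sits above threshold or oscillates rapidly around it */
        double M = oscillating ? 0.03 + 0.05 * sin(i * 0.5) : 0.1;
        A = step(A, A_work, M);
    }
    printf("%-28s A_pbx after 4 s = %.3f\n", label, A);
}

int main(void) {
    run("(i) A_work = 1, steady M:", 1.0, 0);   /* tracks sigma(M), i.e. goes to ~1      */
    run("(ii) A_work = 0:", 0.0, 0);            /* decay dominates, A goes to ~0         */
    run("(iii) A_work = 1, fast M:", 1.0, 1);   /* averages sigma(M), like a low-pass filter */
    return 0;
}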
In using (1) for a robot control program, one should be aware of such things as, for instance, (i) that if Awork is appreciably greater than zero, the system is dominated by its first term, and can be understood as a P-controller for the target Mlf + Mrf; or (ii) that if Awork is near zero, the system is dominated by the decay term; or (iii) that if Awork is appreciably above zero and (Mlf + Mrf) varies on a broad frequency spectrum (which can occur when the robot moves with a high turn rate), the system can best be understood as a low-pass filter for (Mlf + Mrf). Note that there is no bifurcation involved in the changes between (i), (ii), and (iii): if Awork and (Mlf + Mrf) are formally interpreted as control parameters, (1) is a single fixed-point attractor system over the entire relevant range of control parameters. These different "flavors" of (1) appear when the input quantities Awork and Mlf + Mrf assume particular value ranges or temporal characteristics.

Of course, a mathematically experienced person will instantly know from the simple formula (1) that this system can behave like (i), (ii), and (iii). But in brain research, one does not know the "formula" of the patch of neural tissue one observes, and one observes it only under quite particular values and temporal characteristics of the input to that system (even worse, it will often be unclear what actually is the "input" to the system). What one observes, then, is only one particular way of functioning of this piece of tissue, analogous to obtaining only one of (i), (ii), or (iii). From here it is not a long way to the "1 module = 1 mechanism" fallacy, when one claims to have determined "the" function of a piece of neural tissue. I am not a brain researcher, so I cannot judge how much damage this fallacy really does. As a relative outsider, however, I might report that I found certain controversies in the literature about "the" function of the cerebellum remarkably unproductive. In the light of the lessons told by BK's simple subsystem (1), I would find it a more productive approach to accept this organ's multi-functionality and try to understand how the many reported cerebellar control and filter mechanisms co-exist and interact with one another, and how they depend on sensoric input conditions. In this context I would like to re-emphasize that the computer metaphor of the brain, and the algorithmic metaphor of biological information processing that goes with it, are likely to blind us to effects of multi-functionality.

Multi-functionality is not a well-defined term. The above example demonstrates that a system can display "qualitatively different" behaviors without undergoing bifurcations, solely due to different properties of the input it receives. Much more drastic examples of this kind are resonance phenomena that can arise in oscillatory systems with oscillating input (for an example, again without bifurcations, see Jaeger 1995, where I called the phenomenon "qualitative disruption"). When one thinks of multi-functionality, however, one will most likely think of bifurcations more than of other effects. In fact, the dual dynamics design scheme arose from the need to let bifurcations of subsystems happen in a transparent fashion (cf. the overview at the beginning of this section). In the standard mathematical perspective, bifurcations are induced by changes in control parameters, which are assumed to be either fixed or to have a much slower dynamics than the system they control.
However, in biological and robot behavior control systems, a clean separation in timescales between control parameters and the system dynamics proper rarely exists. The DD approach tries to profit as much as possible from the mathematical clarity of the concepts of control parameters and bifurcations, in that it makes time scales the crucial property by which a behavior control system is separated into "levels". Higher levels are, by definition, slower levels. Only as a secondary step is this technical distinction in timescale interpreted in more familiar terms, e.g. by loosely saying that higher levels are "motivational" (assuming that motives are slow compared to the actions they motivate), or that higher levels contain "comprehensive" behaviors (since a comprehensive behavior, like work, necessarily is active longer than the behaviors it comprehends, like approach pushbox).

With respect to bifurcation-related multi-functionality, BK and DD have taught me two lessons, one general and one more specific. The general one: taming bifurcations is possible, at least in a system as simple as BK. BK runs well, and was quite easily designed and debugged, thanks to its DD architecture, which puts bifurcations at the center of the designer's attention. This is of course not an indication that biological systems are organized similarly to BK, but it is an indication that focussing on bifurcations is a healthy entry point for constructing formal models of complete behavior systems. This insight might be a valuable orientation for other researchers who may wish to develop dynamical systems models which go beyond DD. The specific lesson of BK and DD is that it suffices to let the activation dynamics bifurcate. One does not have to bother about bifurcations of the target dynamics. This is potentially quite a practically helpful idea, since the target trajectory is as high-dimensional as the number of actuators, whereas the activation dynamics is only one-dimensional.

4 Discussion

The general purpose of this article was to promote mobile robots as a tool for brain research. The reason why this makes sense is (i) that both mobile robot control systems and biological brains have the same basic kind of task to accomplish, namely, to generate motor action from sensoric input in an agent-environment interaction loop where the input is continually changed and shaped by the very motor action, and (ii) that robot control systems are much more accessible than biological brains. Thus, generally speaking, mobile robots are epistemologically convenient "brains on wheels". Compared to simulation studies, robots have the advantage that they have to cope with real-world sensoric input and generate real-world motor action. I beg the reader to recall my brief intuitive description of how a toad perceives a worm, out there in its real habitat. No current simulation tool can capture the unpredictability and richness of real-world agent-environment interactions.

A mutually beneficial exchange of inspirations and techniques already exists between control theory, robotics, neural networks research, and brain science (overviews in Kawato & Gomi 1992, Miall 1995, and Dean & Cruse 1995). Within this active and varied field, probably the most common theme is the motion control of a manipulator arm, often guided by visual input.
Two exceptions are Zalama et al. 1995, where techniques that had originally been developed for a manipulator reaching task are applied to targeting tasks of a mobile robot, and Baloch & Waxman 1991, where an impressive variety of biologically inspired mechanisms for visual object recognition, learning, expectations, "emotional" states, and conditioning are combined in a neural-network-based control system for a mobile robot, MAVIN, which targets or avoids a number of different kinds of "objects" presented as pictograms.

A more specific purpose of this article was to promote for brain research a special branch of robotics, namely, behavior-oriented robotics. A theoretical reason for favouring behavior-based robots over more traditional ones lies in the commitment to "integrated" designs, with tight internal and agent-environment feedback loops. As a result, the traditional distinctions between sensing, representation, planning, and motor execution dissolve. Likewise, biological brains apparently respond to stimuli in an integrated fashion, where almost all concerned brain subsystems interact in some shared time window through mutual feedback, providing an apparently holistic system answer. A comparison of the "biocybernetically traditional" robot MAVIN with BK will help to illustrate this argument. MAVIN has (respectively, builds) internal representations of the kinds of objects it can (or learns to) recognize. These representations are particular nodes in a connectionist network. MAVIN's action control depends on a stable object recognition, i.e. activation of the appropriate node. Visual object recognition is a notoriously difficult task. MAVIN requires an external, bulky, specialized image processor for real-time visual feature extraction. Furthermore, the task of object recognition is essentially reduced to recognizing standardized pictograms that stand out brightly from the optical background of lab walls, equipment, shacks etc. MAVIN moves on a preprogrammed path and thus does not need to navigate or avoid obstacles. In fact, from the perspective of behavior-based robotics, MAVIN would hardly be called a "situated agent". By contrast, the Black Knight does not "recognize" objects in any traditional sense of the word. Giving up this ability has made possible a design which is simpler, cheaper, and lighter than MAVIN's by at least two orders of magnitude, and which can be called "situated" with some justification. However, for the sake of fairness I must point out that MAVIN features several advanced learning mechanisms, whereas BK is hardwired.

Another reason for my liking of behavior-based robots is aesthetic. To the human observer, the motor action of behavior-based robots looks surprisingly "animalistic". It is typically swift and smooth, displays natural-looking moments of "hesitation", "orientation" or "motivational conflict", and sometimes gets stuck in pathetically stubborn "efforts" to achieve something. Of course, such anthropomorphic interpretations are scientifically worthless. Yet I confess that I was turned into a behavior-oriented roboticist through watching videos of artificial MIT Lab insects. One should be aware, however, that behavior-oriented robotics is still very much at the bottom end of the alleged bottom-up approach to understanding intelligence. Behavior-based robots do not yet support any "higher" cognitive functions, let alone language or abstract reasoning. Their capabilities to represent or memorize world information are poor.
Rarely are they endowed with even such seemingly elementary abilities as representing some kind of "map" of the environment (noteworthy exceptions are, e.g., Engels & Schöner 1995 and Zimmer 1996). Behavior-based robots should be understood as modest approximations to insects at best, albeit to complete insects, which is quite immodest.

The most specific purpose of this article, finally, was to promote a particular family of approaches within the behavior-oriented school of robotics, namely, those in which mobile agents are understood and designed as dynamical systems. Adopting the mathematical framework of dynamical systems theory is a relatively recent development in behavior-oriented robotics (Smithers 1995, Beer 1995), which methodologically connects this field to similar strands in contemporary cognitive science (Smith & Thelen 1993, van Gelder & Port 1995) and, importantly, to biocybernetics. I have explored in detail elsewhere (Jaeger 1996b) the pros and cons of a dynamical systems perspective on intelligent agents. On the present occasion, I would just like to recall from the previous section the observation that dynamical systems, unlike algorithms, but like biological brains, inherently possess traits of multi-functionality.

Obviously, robot control programs of the kind sketched in this article should not be considered models of biological brains. Exceptions to this rule are robots dedicated to testing biological theories concerning basic sensomotoric coordinations on the level of insect locomotion, as exemplified in the beautiful studies on stick insect walking performed by Cruse et al. (1995) or on desert ant navigation (XX 1997, reference will be supplied when/if this paper is accepted). However, no robot exists today that exhibits anything amounting to the full behavioral repertoire of an insect, or the complexity and robustness of even a single vertebrate behavior. Still, robot experiments seem to me invaluable sources of inspiration and insight for brain research. Robots allow, even force, the experimenter to address the problem of how a complete behavior control system works in situ. Although it can be argued that no concrete biological brain theory can spring from exploring artifacts, general insights that also hold for biological brain models can be gained. These insights constrain the makeup of potential brain models under the auspices of completeness, integratedness, and situatedness. For example, working with BK has taught me the following lessons:

Activity can replace representation: In order to cope with difficult and changing lighting conditions, BK relies not on high-quality sensors, involved sensor preprocessing, or detailed classifications of world states, as classically designed robots do. Instead, BK sorts things out by acting them out. This results in a simple control program and robust performance, albeit at the cost of frequent "half-tries" and occasional episodes of getting stuck in a behavioral "garden path".

Timing is crucial: For a behavior control system which allows the robot to be continually and smoothly active, without segmenting the performance into a discrete sequence of actions, the temporal characteristics of all control processes are obviously important. It turned out in working with BK that timing (changes of rate, and changes of changes of rate, of almost every variable at every time) is not just of some importance, but of paramount importance.
Understanding what was going wrong and what was going right amounted in most instances to understanding time constants, and variable input parameters that took the formal position of time constants.

Activation dynamics matters as much as target dynamics: I naively used to believe that the execution control of ongoing motor activity was the hard control problem, where fine-tuned timing is required, while the buildup and fadeout of "motivational" forces were a simpler affair. But the dynamics of the "motivational" variables (i.e. the activation parameters in the "dual dynamics" framework) turned out to be as crucial, and as involved, as the execution dynamics (i.e. the "target dynamics"). Furthermore, in BK the activation dynamics is of central importance, since it is the sole carrier of the bifurcations that enable sequences of qualitatively different actions.

I cannot judge how relevant these particular observations are for brain research. Be that as it may, they point in a definite direction for the intuitive understanding, and the formal modeling, of behavior control systems, biological or artificial. Furthermore, these observations could not have been obtained from any experience other than that of trying to make a robot work. For readers who feel tempted to start building their own robot, I would like to recommend Jones and Flynn's (1993) "Mobile Robots: Inspirations to Implementation", which also lists suppliers of robot equipment.

Acknowledgments. This paper was triggered by the exceptionally inspiring, 8-day, interdisciplinary, national workshop "Wege ins Hirn" (Routes into the Brain) held at the monastery of Seeon in September 1996. Many hearty thanks to Ernst Pöppel, Kerstin Schill and Thomas Christaller, who made it happen. Furthermore, I wish to express my deep gratitude to Luc Steels, Peter Stuer and Dany Vereertbrugghen, who let me use the Black Knight and their fine robotics environment, and who generally supported me in every conceivable way. Test readers Uwe Zimmer and Andreas Birk gave valuable hints for improving this paper. Finally, thanks to Rolf Pfeifer, Christian Scheier, and Tim van Gelder for many an insightful discussion, and to my home institution GMD for a postdoctoral grant which gives me both the means and the freedom to pursue my own chosen work.

A Dual dynamics: the formal model

Here I present a core version of the formal DD model. See Fig. 2 for a graphical schema. A more elaborate version is described in Jaeger 1996a.

First I describe a single elementary behavior. The target dynamics of an elementary behavior $B_j$ yields a vector-valued target trajectory $g_j(t)$, where each vector component represents the target trajectory for a particular effector. The target dynamics is expressed via ordinary differential equations:

    \dot{g}_j = G(g_j, \alpha_j, I_j(t))                                  (3)

The quantity $\alpha_j$ is the activation of $B_j$ (see below). $I_j(t)$ represents time-varying input to $B_j$, such as sensor input.

The activation dynamics of $B_j$ consists in the trajectory of a single parameter, $\alpha_j$, which determines whether the behavior is inhibited ($\alpha_j \approx 0$) or active ($\alpha_j \approx 1$), or something in between. A main idea in DD is that the activation dynamics of a behavior can bifurcate, yielding different "modes". This is achieved by utilizing the activations $\alpha'_1, \ldots, \alpha'_m$ of the level-1 complex behaviors $B'_1, \ldots, B'_m$ as control parameters. The activation dynamics of $B_j$ looks as follows:

    \dot{\alpha}_j = \alpha'_1 T_{j,1}(\alpha_j, g_j, I_j(t)) + \ldots + \alpha'_m T_{j,m}(\alpha_j, g_j, I_j(t)) - \alpha_j k \prod_{i=1,\ldots,m} (1 - \alpha'_i)^2      (4)

This equation needs some explanation.
$T_{j,i}(\alpha_j, g_j, I_j(t))$ is a function of $\alpha_j$, the behavior's target trajectory $g_j$, and possibly other input quantities $I_j(t)$. Each $T_{j,i}$ corresponds to a particular mode of (4), which is entered when $\alpha'_i$ is roughly equal to 1 and the other $\alpha'_k$ are roughly equal to 0. $T_{j,i}$ can be designed explicitly and independently from the other $T_{j,k}$. However, the picture remains neat like this only as long as a suitable winner-take-all mechanism between the various $\alpha'_i$ makes sure that only one of them is sizable at a given time, which may or may not be the right thing to have in practice. The decay term $-\alpha_j k \prod_{i=1,\ldots,m}(1 - \alpha'_i)^2$ brings $\alpha_j$ back to zero when all $\alpha'_i$ vanish.

As to the question of what kind of input $I_j(t)$ is permissible for an elementary behavior $B_j$, DD has an iron rule: the only input which comes top-down from higher-level behaviors is the mode control parameters $\alpha'_i$ in the activation dynamics. This rule implicitly allows one to use every conceivable non-top-down source for $I_j(t)$, e.g., sensor input, activations of other elementary behaviors, or their target trajectories.

In order to yield an output signal from the behavior to the actuators, $g_j(t)$ and $\alpha_j(t)$ are combined via the following product assignment:

    \dot{u}_j = k_j \alpha_j (g_j - \hat{z}_j),                           (5)

where $\hat{z}_j$ is the estimated current state of the actuators, $k_j$ is a gain constant, and $u_j$ is the signal issued from the behavior to the actuators. This product term implements a simple closed-loop control (of P-controller type) which tries to make the actuators follow the target trajectory $g_j$. If this proportional control proves inefficient, the product term can be augmented to more complex control schemes, e.g. PID control. The DD scheme is not committed in this term to a particular kind of control.

Taken all together, the DD model of an elementary behavior consists of equations (3), (4), and (5).
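To make these equations concrete, here is a minimal numerical sketch, in Python, of one elementary behavior with a single effector, integrated with forward Euler. It is emphatically not BK's control program: the particular choices of $G$, the two mode functions, the fake sensor signal, the toy actuator model and all constants are my own assumptions for illustration. The run first switches the behavior on through an external level-1 activation and, after five simulated seconds, withdraws all level-1 activations, so that the decay term of equation (4) pulls $\alpha_j$ back to zero.

```python
# A minimal sketch of one DD elementary behavior, equations (3)-(5), integrated
# with forward Euler. This is NOT the Black Knight's control program: the choice
# of G, the two mode functions, the fake sensor signal, the actuator model and
# all constants are illustrative assumptions.

from math import prod

DT = 0.01        # Euler step size (s), assumed
TAU_G = 0.2      # time constant of the target dynamics, assumed
K_DECAY = 2.0    # k in the decay term of eq. (4), assumed
K_GAIN = 5.0     # k_j, gain of the P-controller in eq. (5), assumed
TAU_ACT = 0.1    # time constant of the toy actuator model, assumed


def G(g, alpha, sensor_target):
    """Target dynamics, eq. (3): here it simply relaxes g toward a
    sensor-derived target (alpha is allowed as an argument but unused)."""
    return (sensor_target - g) / TAU_G


def T_activate(alpha, g, sensor_target):
    """Mode function T_{j,1}: pull the activation toward 1."""
    return (1.0 - alpha) / 0.5


def T_inhibit(alpha, g, sensor_target):
    """Mode function T_{j,2}: pull the activation toward 0."""
    return (0.0 - alpha) / 0.5


MODES = [T_activate, T_inhibit]   # one mode function per level-1 complex behavior


def step(g, alpha, u, z_hat, alpha_prime, sensor_target):
    """One Euler step of eqs. (3), (4) and (5) for a single elementary behavior."""
    g_dot = G(g, alpha, sensor_target)                              # eq. (3)
    alpha_dot = sum(a_p * T(alpha, g, sensor_target)                # eq. (4), mode terms
                    for a_p, T in zip(alpha_prime, MODES))
    alpha_dot -= alpha * K_DECAY * prod((1.0 - a_p) ** 2            # eq. (4), decay term
                                        for a_p in alpha_prime)
    u_dot = K_GAIN * alpha * (g - z_hat)                            # eq. (5)
    return g + DT * g_dot, alpha + DT * alpha_dot, u + DT * u_dot


if __name__ == "__main__":
    g, alpha, u, z_hat = 0.0, 0.0, 0.0, 0.0
    for n in range(1000):                       # 10 seconds of simulated time
        # level-1 activations: behavior switched on for 5 s, then released
        alpha_prime = [1.0, 0.0] if n < 500 else [0.0, 0.0]
        sensor_target = 1.0                     # fake, constant sensor reading
        g, alpha, u = step(g, alpha, u, z_hat, alpha_prime, sensor_target)
        z_hat += DT * (u - z_hat) / TAU_ACT     # crude first-order actuator model
    print(f"g={g:.2f}  alpha={alpha:.2f}  estimated actuator state={z_hat:.2f}")
```

Even in this toy, what the behavior visibly does is shaped as much by the time constants of the activation dynamics as by the target dynamics, which is the point of the "timing is crucial" lesson above.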
Now let us turn to the complete picture. The DD scheme allows one to construct multi-level architectures with any number of levels. Since higher levels (the top level excepted) are formally similar to each other, I can restrict this presentation to the case of a three-level architecture with $l$ second-level complex behaviors $B''_1, \ldots, B''_l$, $m$ first-level complex behaviors $B'_1, \ldots, B'_m$, and $n$ elementary behaviors $B_1, \ldots, B_n$.

While in the fully fledged DD model complex behaviors are entirely similar to elementary ones, we will present here a simplified case where complex behaviors are reduced to an activation dynamics. Basically this means that their target dynamics is trivially 1 and thus can be omitted, together with the product term. This has turned out to be sufficient for simple 2-DOF vehicles with simple tasks. For a first-level complex behavior $B'_i$, we get

    \dot{\alpha}'_i = \alpha''_1 T'_{i,1}(\alpha'_i, I'_i(t)) + \ldots + \alpha''_l T'_{i,l}(\alpha'_i, I'_i(t)) - \alpha'_i k \prod_{h=1,\ldots,l} (1 - \alpha''_h)^2      (6)

The activation dynamics of a second-level complex behavior $B''_h$ has the form

    \dot{\alpha}''_h = T''_h(\alpha''_h, I''_h(t))                        (7)

The iron rule concerning inputs mentioned in the previous section transfers to complex behaviors. Thus, $I'_i(t)$ and $I''_h(t)$ can be virtually anything, provided it does not come from higher levels.
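The hierarchy can be sketched in the same spirit. The following self-contained toy implements only the activation cascade of the simplified three-level case: a single second-level behavior evolving according to equation (7) gates two hypothetical first-level behaviors, called "explore" and "avoid" here, which evolve according to equation (6). Their activations would, in a complete system, serve as the mode-control parameters $\alpha'_i$ of equation (4). Again, the mode functions, the fake obstacle sensor and all constants are invented for the example.

```python
# A self-contained toy of the activation cascade in the simplified three-level
# architecture: one second-level behavior (eq. 7) gates two hypothetical
# first-level behaviors, "explore" and "avoid" (eq. 6). The mode functions,
# the fake obstacle sensor and all constants are invented for illustration;
# in a full system the resulting alpha' values would gate elementary
# behaviors through eq. (4).

DT = 0.01     # Euler step size (s), assumed
K = 2.0       # decay constant k, assumed


def T_level2(a, t):
    """Eq. (7): self-contained activation dynamics of the single second-level
    behavior; here it simply switches the whole system on after t = 2 s."""
    target = 1.0 if t > 2.0 else 0.0
    return (target - a) / 0.3


def T_explore(a, sensor):
    """Mode function for 'explore' (assumed): active while the sensor is quiet."""
    return ((1.0 - sensor) - a) / 0.5


def T_avoid(a, sensor):
    """Mode function for 'avoid' (assumed): active while the sensor fires."""
    return (sensor - a) / 0.5


def simulate(t_end=6.0):
    a2 = 0.0                        # alpha'' of the second-level behavior
    a_explore, a_avoid = 0.0, 0.0   # alpha' of the two first-level behaviors
    t = 0.0
    while t < t_end:
        sensor = 1.0 if 3.0 < t < 4.0 else 0.0    # fake obstacle from 3 s to 4 s
        a2 += DT * T_level2(a2, t)                # eq. (7)
        decay = K * (1.0 - a2) ** 2               # decay product (single level-2 behavior)
        a_explore += DT * (a2 * T_explore(a_explore, sensor) - a_explore * decay)  # eq. (6)
        a_avoid += DT * (a2 * T_avoid(a_avoid, sensor) - a_avoid * decay)          # eq. (6)
        t += DT
    return a2, a_explore, a_avoid


if __name__ == "__main__":
    a2, a_explore, a_avoid = simulate()
    print(f"alpha''={a2:.2f}  alpha'_explore={a_explore:.2f}  alpha'_avoid={a_avoid:.2f}")
```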
References

[1] A.H. Vera and H.A. Simon. Situated action: A symbolic interpretation. Cognitive Science, 17(1):7-48, 1993.
[2] G.P. Baerends. On drive, conflict and instinct, and the functional organization of behavior. In M.A. Corner and D.F. Swaab, editors, Perspectives in Brain Research. Proc. of the 9th Int. Summer School of Brain Research, Amsterdam, August 1975, pages 427-447. Elsevier, Amsterdam, 1975.
[3] A.A. Baloch and A.M. Waxman. Visual learning, adaptive expectations, and behavioral conditioning of the mobile robot MAVIN. Neural Networks, 4(3):271-302, 1991.
[4] R. Beer. A dynamical systems perspective on agent-environment interaction. Artificial Intelligence, 72(1/2):173-216, 1995.
[5] G.P. Bingham. Dynamics and the problem of visual event recognition. In R. Port and T. van Gelder, editors, Mind as Motion: Explorations in the Dynamics of Cognition, chapter 14, pages 403-448. MIT Press/Bradford Books, 1995.
[6] J.R. Bloedel. Functional heterogeneity with structural homogeneity: How does the cerebellum operate? Behavioral and Brain Sciences, 15:666-678, 1992.
[7] V. Braitenberg. Vehicles: Experiments in Synthetic Psychology. MIT Press, 1984.
[8] R.A. Brooks. Intelligence without reason. A.I. Memo 1293, MIT AI Lab, 1991.
[9] R.A. Brooks. The whole iguana. In M. Brady, editor, Robotics Science, pages 432-456. MIT Press, Cambridge, Mass., 1989.
[10] D. Bullock and S. Grossberg. Neural dynamics of planned arm movements: emergent invariants and speed-accuracy properties during trajectory formation. Psychological Review, 95:49-90, 1988.
[11] C.B. Congdon, M. Huber, D. Kortenkamp, C. Bidlack, Ch. Cohen, S. Huffman, F. Koss, U. Raschke, T. Weymouth, K. Konolige, K. Myers, A. Saffiotti, E. Ruspini, and D. Musto. CARMEL vs. Flakey: a comparison of two robots. Technical report rc-92-01, AAAI, 1992.
[12] F.J. Corbacho and M.A. Arbib. Learning to detour. Adaptive Behavior, 3(4):419-468, 1995.
[13] H. Cruse, D.E. Brunn, Ch. Bartling, J. Dean, M. Dreifert, T. Kindermann, and J. Schmitz. Walking: A complex behavior controlled by simple networks. Adaptive Behavior, 3(4):385-418, 1995.
[14] J. Dean and H. Cruse. Motor pattern generation. In M.A. Arbib, editor, The Handbook of Brain Theory and Neural Networks, pages 600-605. MIT Press/Bradford Books, 1995.
[15] J.L. Deneubourg, S. Goss, N. Franks, A. Sendova-Franks, C. Detrain, and L. Chrétien. The dynamics of collective sorting: Robot-like ants and ant-like robots. In J.A. Meyer and S. Wilson, editors, From Animals to Animats 1. Proceedings of the First International Conference on the Simulation of Adaptive Behavior, pages 356-365. MIT Press, 1991.
[16] Ch. Engels and G. Schöner. Dynamic fields endow behavior-based robots with representations. Robotics & Autonomous Systems, 14:55-77, 1995.
[17] J.-P. Ewert, T.W. Beneke, E. Schurg-Pfeiffer, W.W. Schwippert, and A. Weerasuriya. Sensorimotor processes that underlie feeding behavior in tetrapods. In V.L. Bels, M. Chardon, and P. Vandewalle, editors, Biomechanics of Feeding in Vertebrates, volume 18 of Comparative and Environmental Physiology, pages 119-161. Springer Verlag, 1994.
[18] S. Grossberg. Neural dynamics of motion perception, recognition learning, and spatial attention. In R. Port and T. van Gelder, editors, Mind as Motion: Explorations in the Dynamics of Cognition, chapter 15, pages 449-490. MIT Press/Bradford Books, 1995.
[19] I. Harvey, P. Husbands, and D. Cliff. Seeing the light: Artificial evolution, real vision. In D. Cliff et al., editors, Proc. 3rd Int. Conf. on the Simulation of Adaptive Behavior, pages 392-405. MIT Press/Bradford Books, 1994.
[20] H. Jaeger. Modulated modules: designing behaviors as dynamical systems. Arbeitspapiere der GMD 927, GMD, St. Augustin, 1995. ftp://ftp.gmd.de/ai-research/Publications/1995/jaeger.95.modmod.ps.gz
[21] H. Jaeger. The dual dynamics design scheme for behavior-based robots: A tutorial. Arbeitspapiere der GMD 966, GMD Forschungszentrum Informationstechnik GmbH, St. Augustin, 1996.
[22] H. Jaeger. Dynamische Systeme in der Kognitionswissenschaft. Kognitionswissenschaft, 5(4), 1996.
[23] J.L. Jones and A.M. Flynn. Mobile Robots: Inspirations to Implementation. Peters, 1993.
[24] M. Kawato and H. Gomi. The cerebellum and VOR/OKR learning models. Trends in Neuroscience, 15(11):445-453, 1992.
[25] F. Kirchner. KURT: A prototype study of an autonomous mobile robot for sewerage system inspection. Arbeitspapiere der GMD 989, GMD Forschungszentrum Informationstechnik GmbH, Sankt Augustin, 1996.
[26] P. Maes. Situated agents can have goals. Robotics and Autonomous Systems, 6:49-70, 1990.
[27] H. Maturana and F.J. Varela. Der Baum der Erkenntnis (German transl.; original: El árbol del conocimiento). Scherz, Bern/München, 1987 (originally appeared in 1984).
[28] D.A. McCrea. Can sense be made of spinal interneuron circuits? Behavioral and Brain Sciences, 15:633-643, 1992.
[29] D. McFarland. Towards robot cooperation. In D. Cliff, editor, From Animals to Animats III: Proceedings of the 3rd International Conference on Simulation of Adaptive Behavior, pages 440-444. Bradford/MIT Press, 1994.
[30] R.C. Miall. Motor control, biological and theoretical. In M.A. Arbib, editor, The Handbook of Brain Theory and Neural Networks, pages 597-600. MIT Press/Bradford Books, 1995.
[31] R.C. Miall, D.J. Weir, D.M. Wolpert, and J.F. Stein. Is the cerebellum a Smith predictor? J. of Motor Behavior, 25(3):203-216, 1993.
[32] M. Minsky. The Society of Mind. Pan Books, London, 1987 (originally appeared 1985 in the United States).
[33] F. Pasemann. Neuromodules: A dynamical systems approach to brain modelling. In H.J. Herrman, D.E. Wolf, and E. Pöppel, editors, Proceedings of the Workshop: Supercomputing in Brain Research - from Tomography to Neural Networks, pages 331-348. HLRZ, KFA Jülich, Germany, World Scientific, 1994.
[34] F. Pasemann. Repräsentation ohne Repräsentation - Überlegungen zu einer Neurodynamik modularer kognitiver Systeme. In O. Breidbach, editor, Innere Repräsentationen - Neuere Ergebnisse der Hirnforschung. Suhrkamp Verlag, Frankfurt, 1996 (to appear).
[35] R. Pfeifer and Ch. Scheier. An Introduction to New Artificial Intelligence. Submitted to MIT Press, 1996.
[36] H. Shimizu. Biological autonomy: The self-creation of constraints. Applied Mathematics and Computation, 56:177-201, 1993.
[37] L.B. Smith and E. Thelen, editors. A Dynamic Systems Approach to Development: Applications. Bradford/MIT Press, Cambridge, Mass., 1993.
[38] T. Smithers. On quantitative performance measures of robot behavior. Robotics and Autonomous Systems, 15(1/2):107-134, 1995.
[39] L. Steels. Building agents out of autonomous behavior systems. In L. Steels and R.A. Brooks, editors, The "Artificial Life" Route to "Artificial Intelligence": Building Situated Embodied Agents. Lawrence Erlbaum, 1993.
[40] L.A. Suchman. Plans and Situated Actions. Cambridge University Press, Cambridge, 1987.
[41] A. Thompson. Evolving electronic robot controllers that exploit hardware resources. In Proc. of the 3rd Europ. Conf. on Artificial Life (ECAL95). Springer Verlag, 1995.
[42] T. Tyrrell. The use of hierarchies for action selection. Adaptive Behavior, 1(4):387-420, 1993.
[43] T. van Gelder. The dynamical hypothesis in cognitive science. Behavioral and Brain Sciences, to appear.
[44] T. van Gelder and R. Port, editors. Mind as Motion: Explorations in the Dynamics of Cognition. Bradford/MIT Press, 1995.
[45] M. Wooldridge and N.R. Jennings. Intelligent agents: theory and practice. The Knowledge Engineering Review, 10(2):115-152, 1995.
[46] Y. Yao and W.J. Freeman. A model of biological pattern recognition with spatially chaotic dynamics. Neural Networks, 3(2):153-170, 1990.
[47] E. Zalama, P. Gaudiano, and J.L. Coronado. A real-time, unsupervised neural network for the low-level control of a mobile robot in a nonstationary environment. Neural Networks, 8(1):103-123, 1995.
[48] U.R. Zimmer. Robust world-modelling and navigation in a real world. Neurocomputing, 13:247-260, 1996.
